
    Timbre as an Elusive Component of Imagery for Music

    Evidence of the ability to imagine timbre is either anecdotal or applies to isolated instrument tones rather than timbre in real music. Experiments were conducted to infer the vividness of timbre in imagery for music. Music students were asked to judge whether the timbre of a sounded target note was the same as or different from the original following a heard, imagined, or control musical context. A pilot experiment manipulated instrumentation, while the main experiment manipulated sound filters. The hypothesis that participants are able to internalise timbral aspects of music was supported by an ability to perform the timbre discrimination task, and by a facilitated response when imaging the timbre context compared with non-imaging. However, while participants were able to mentally represent timbre, this was not always reported as being a conscious dimension of their musical image. This finding is discussed in relation to previous research suggesting that timbre may be a sound characteristic that is optionally present in imagery for music.

    A Response to Andrea R. Halpern's Commentary

    The author responds to points raised in Andrea Halpern’s commentary, which appeared in Vol. 2, No. 1 of Empirical Musicology Review. Discussion focuses on the apparent contradiction between self-reports of veridical mental imagery of musical timbre, and cognitive constraints on temporal memory for multidimensional sound.

    A Response to Cross & Rohrmeier's 'Comments on Facilitation and Coherence Between the Dynamic and Retrospective Perception of Segmentation in Computer-Generated Music'

    The commentary by Cross and Rohrmeier (2007) attempts to locate our paper (Bailes and Dean, 2007a) as a study of timbre, and points out the ongoing development of research in this area, including attempts to define psychoacoustic thresholds of perception. However, our work is directed to understanding broader psychological phenomena such as the impact of sound duration on the perception of structure in computer music, and the concordance between real-time and retrospective measures. We discuss further our identification of an asymmetrical detection of sound segmentation, questioning the conceptual distinctions of timbre perception that Cross and Rohrmeier propose.

    Comparative time series analysis of perceptual responses to electroacoustic music

    This study investigates the relationship between acoustic patterns in contemporary electroacoustic compositions, and listeners’ real-time perceptions of their structure and affective content. Thirty-two participants varying in musical expertise (nonmusicians, classical musicians, expert computer musicians) continuously rated the affect (arousal and valence) and structure (change in sound) they perceived in four compositions of approximately three minutes' duration. Time series analyses tested the hypotheses that sound intensity influences listener perceptions of structure and arousal, and spectral flatness influences perceptions of structure and valence. Results suggest that intensity strongly influences perceived change in sound, and to a lesser extent listener perceptions of arousal. Spectral flatness measures were only weakly related to listener perceptions, and valence was not strongly shaped by either acoustic measure. Differences in response by composition and musical expertise suggest that, particularly with respect to the perception of valence, individual experience (familiarity and liking) and meaningful sound associations mediate perception.
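
    As an illustration of the acoustic descriptors named above, the following minimal sketch extracts frame-wise intensity (RMS) and spectral flatness from an audio file and resamples them onto a coarser rating grid. It assumes the librosa library; the file name piece.wav and the 2 Hz rating rate are placeholders for illustration, not details taken from the study.

        # Minimal sketch: frame-wise intensity (RMS) and spectral flatness,
        # resampled onto a hypothetical 2 Hz continuous-rating grid.
        # Assumes librosa; "piece.wav" is a placeholder file name.
        import numpy as np
        import librosa

        y, sr = librosa.load("piece.wav", sr=None, mono=True)  # placeholder path
        hop = 512

        rms = librosa.feature.rms(y=y, hop_length=hop)[0]                    # intensity proxy
        flatness = librosa.feature.spectral_flatness(y=y, hop_length=hop)[0]

        frame_times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop)

        # Interpolate both series onto the (assumed) 2 Hz grid of the listener ratings
        rating_times = np.arange(0.0, frame_times[-1], 0.5)
        intensity_2hz = np.interp(rating_times, frame_times, rms)
        flatness_2hz = np.interp(rating_times, frame_times, flatness)

        print(intensity_2hz.shape, flatness_2hz.shape)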

    Facilitation and Coherence Between the Dynamic and Retrospective Perception of Segmentation in Computer-Generated Music

    We examined the impact of listening context (sound duration and prior presentation) on the human perception of segmentation in sequences of computer music. This research extends previous work by the authors (Bailes & Dean, 2005), which concluded that context-dependent effects such as the asymmetrical detection of an increase in timbre compared to a decrease of the same magnitude have a significant bearing on the cognition of sound structure. The current study replicated this effect, and demonstrated that listeners (N = 14) are coherent in their detection of segmentation between real-time and retrospective tasks. In addition, response lag was reduced from a first hearing to a second hearing, and following long (7 s) rather than short (1 or 3 s) segments. These findings point to the role of short-term memory in dynamic structural perception of computer music.

    Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird

    Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart’s Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and in our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high-level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information-theoretic data streams as predictors, and how the music- or information-theoretic approaches may be applied to electroacoustic music, which is ‘sound-based’ rather than note-centred like Western classical music.
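
    For readers unfamiliar with the feature, the sketch below computes a frame-wise spectral centroid directly from a magnitude spectrogram (the amplitude-weighted mean frequency, a common correlate of brightness) and, for comparison, via a library convenience call. This is illustrative only, assuming librosa; the file name piece.wav is a placeholder, and the code is not taken from the paper.

        # Minimal sketch of the spectral centroid: the amplitude-weighted mean
        # frequency of each STFT frame. librosa and "piece.wav" are assumptions.
        import numpy as np
        import librosa

        y, sr = librosa.load("piece.wav", sr=None, mono=True)    # placeholder path
        S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))  # magnitude spectrogram
        freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)       # bin centre frequencies (Hz)

        # centroid_t = sum_f f * |S(f, t)| / sum_f |S(f, t)|
        centroid = (freqs[:, None] * S).sum(axis=0) / (S.sum(axis=0) + 1e-10)

        # The same feature via the library convenience function, for comparison
        centroid_lib = librosa.feature.spectral_centroid(S=S, sr=sr)[0]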

    Time Series Analysis as a Method to Examine Acoustical Influences on Real-time Perception of Music

    Multivariate analyses of dynamic correlations between continuous acoustic properties (intensity and spectral flatness) and real-time listener perceptions of change and expressed affect (arousal and valence) in music are developed by an extensive application of autoregressive Time Series Analysis (TSA). TSA offers a large suite of techniques for modeling autocorrelated time series, such as constitute both music’s acoustic properties and its perceptual impacts. A logical analysis sequence from autoregressive integrated moving average regression with exogenous variables (ARIMAX) to vector autoregression (VAR) is established. Information criteria discriminate amongst models, and Granger Causality indicates whether a correlation might be a causal one. A 3 min electroacoustic extract from Wishart’s Red Bird is studied. It contains digitally generated and transformed sounds, and animate sounds, and our approach also permits an analysis of their impulse action on the temporal evolution and the variance in the perceptual time series. Intensity influences perceptions of change and expressed arousal substantially. Spectral flatness influences valence, while animate sounds influence the valence response and its variance. This TSA approach is applicable to a wide range of questions concerning acoustic-perceptual relationships in music.
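
    The following sketch illustrates the general shape of such an analysis using statsmodels: fit a vector autoregression to differenced series, select the lag order by information criterion, and test Granger causality. The two synthetic series and their names are placeholders for illustration; this is not the analysis code or the data used in the paper.

        # Minimal VAR / Granger-causality sketch with statsmodels.
        # The synthetic "intensity" and "arousal" series are placeholders,
        # not the series analysed in the paper.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        n = 400
        intensity = np.zeros(n)
        arousal = np.zeros(n)
        for t in range(1, n):
            intensity[t] = 0.8 * intensity[t - 1] + rng.normal(scale=1.0)
            # arousal follows intensity with a short lag (by construction)
            arousal[t] = 0.5 * arousal[t - 1] + 0.4 * intensity[t - 1] + rng.normal(scale=1.0)

        df = pd.DataFrame({"intensity": intensity, "arousal": arousal})

        # First-difference the series (the "I" in ARIMAX); in practice,
        # stationarity should be checked before choosing to difference.
        d = df.diff().dropna()

        model = VAR(d)
        order = model.select_order(maxlags=10)      # AIC/BIC/HQIC lag selection
        results = model.fit(maxlags=10, ic="aic")

        # Does lagged intensity help predict arousal beyond arousal's own past?
        gc = results.test_causality("arousal", ["intensity"], kind="f")
        print(order.selected_orders, gc.pvalue)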

    Musical expertise and the ability to imagine loudness

    Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant's imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
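
    As a rough illustration of the profile comparison described above, the sketch below computes a textbook dynamic-time-warping cost between two z-scored loudness profiles using plain NumPy. The two profiles are synthetic placeholders, and the code is a generic sketch rather than the study's analysis pipeline.

        # Generic dynamic-time-warping (DTW) cost between two z-scored loudness
        # profiles. A textbook sketch; the profiles below are synthetic placeholders.
        import numpy as np

        def zscore(x):
            return (x - x.mean()) / x.std()

        def dtw_cost(a, b):
            """Cumulative DTW cost between 1-D series a and b (absolute difference)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Placeholder profiles: a "reference" intensity contour and a noisier,
        # slightly time-stretched "imagined" contour; lower cost = closer profiles.
        reference = np.sin(np.linspace(0, 4 * np.pi, 120))
        imagined = (np.sin(np.linspace(0, 4 * np.pi, 100) * 0.95)
                    + 0.2 * np.random.default_rng(1).normal(size=100))

        print(dtw_cost(zscore(reference), zscore(imagined)))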

    A New Method of Onset and Offset Detection in Ensemble Singing

    This paper presents a novel method combining electrolaryngography and acoustic analysis to detect the onset and offset of phonation, as well as the beginning and ending of notes within a sung legato phrase, through the application of a peak-picking algorithm, TIMEX. The evaluation of the method applied to a set of singing duo recordings shows an overall performance of 78% within a tolerance window of 50 ms compared with manual annotations performed by three experts. Results seem very promising in light of the state-of-the-art techniques presented at MIREX in 2016, which yielded an overall performance of around 60%. The new method was applied in a pilot study with two duets to analyse synchronization between singers during ensemble performances. Results from this investigation demonstrate bidirectional temporal adaptations between performers, and suggest that the precision and consistency of synchronization, and the tendency to precede or lag a co-performer, might be affected by visual contact between singers and leader–follower relationships. The outcomes of this paper promise to be beneficial for future investigations of synchronization in singing ensembles.
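
    The sketch below shows a generic, acoustic-only version of the underlying idea: pick peaks in an onset-strength envelope and compare the resulting onset times of two singers against a 50 ms tolerance. It is not the TIMEX algorithm (which also draws on electrolaryngography); librosa, scipy, and the two file names are assumptions made for illustration.

        # Generic acoustic-only onset picking and pairwise asynchrony measurement.
        # NOT the TIMEX algorithm described above (which also uses
        # electrolaryngography); librosa/scipy and the file names are assumptions.
        import numpy as np
        import librosa
        from scipy.signal import find_peaks

        def onset_times(path, sr=44100, hop=512):
            y, sr = librosa.load(path, sr=sr, mono=True)
            env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
            # Simple peak picking: peaks above the envelope's mean + 1 SD,
            # at least ~100 ms apart.
            peaks, _ = find_peaks(env, height=env.mean() + env.std(),
                                  distance=int(0.1 * sr / hop))
            return librosa.frames_to_time(peaks, sr=sr, hop_length=hop)

        singer1 = onset_times("singer1.wav")   # placeholder file names
        singer2 = onset_times("singer2.wav")

        # For each onset of singer 1, the signed asynchrony to singer 2's nearest onset
        nearest = singer2[np.argmin(np.abs(singer2[None, :] - singer1[:, None]), axis=1)]
        asynchrony = singer1 - nearest
        within_50ms = np.mean(np.abs(asynchrony) <= 0.050)
        print(f"mean asynchrony {asynchrony.mean():+.3f} s, {within_50ms:.0%} within 50 ms")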